Private Predictions with Syft Keras

Step 3: Private Prediction using Syft Keras - Serving (Client)

Congratulations! After training your model with normal Keras and securing it with Syft Keras, you are ready to request some private predictions.


In [1]:
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist

import syft as sy

Data

Here, we preprocess our MNIST data exactly as we did during training.


In [2]:
# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
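
As a quick optional check (assuming the standard MNIST split of 60,000 training and 10,000 test images), you can verify the shapes and value range before moving on:

# Optional sanity check, assuming the standard MNIST split sizes
assert x_train.shape == (60000, 28, 28, 1)
assert x_test.shape == (10000, 28, 28, 1)
# Pixel values should now be scaled into [0, 1]
assert 0.0 <= x_test.min() and x_test.max() <= 1.0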

Connect to model

Before querying the model, you first have to connect to it. To do so, create a client, then define the same three TFEWorkers (alice, bob, and carol) and cluster that were used when serving the model. Finally, call connect_to_model. This creates a TFE queueing server on the client side that connects to the queueing server set up by model.serve() in Part 13b. The queue is responsible for secret sharing the plaintext data before submitting the shares in a prediction request (a toy sketch of the secret sharing idea follows the connection code below).


In [3]:
num_classes = 10

# A single-image batch: (batch, rows, cols, channels) in, (batch, classes) out
input_shape = (1, 28, 28, 1)
output_shape = (1, num_classes)

In [4]:
client = sy.TFEWorker()

# The same three TFEWorkers and cluster defined when serving the model
alice = sy.TFEWorker(host='localhost:4000')
bob = sy.TFEWorker(host='localhost:4001')
carol = sy.TFEWorker(host='localhost:4002')
cluster = sy.TFECluster(alice, bob, carol)

client.connect_to_model(input_shape, output_shape, cluster)


INFO:tf_encrypted:Starting session on target 'grpc://localhost:4000' using config graph_options {
}
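
As an aside, here is a toy illustration of the additive secret sharing idea the queue relies on: a value is split into random shares that individually look like noise but sum back to the original. This is only a conceptual sketch in plain floating point, not TF Encrypted's actual fixed-point protocol over a finite ring.

# Conceptual sketch only: additive secret sharing of a single value.
# (TF Encrypted actually shares fixed-point encodings over a finite ring.)
x = 0.5                                     # plaintext value to protect
share_alice = np.random.uniform(-100, 100)  # random mask held by alice
share_bob = np.random.uniform(-100, 100)    # random mask held by bob
share_carol = x - share_alice - share_bob   # carol's share completes the sum

# No single share reveals x, but together they reconstruct it exactly
assert np.isclose(share_alice + share_bob + share_carol, x)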

Query model

You are ready to get some private predictions! Calling query_model will insert the image into the queue created above, secret share the data locally, and submit the shares to the model server in Part 13b.


In [5]:
# User inputs
num_tests = 3
images, expected_labels = x_test[:num_tests], y_test[:num_tests]

In [6]:
for image, expected_label in zip(images, expected_labels):

    res = client.query_model(image.reshape(1, 28, 28, 1))
    predicted_label = np.argmax(res)

    print("The image had label {} and was {} classified as {}".format(
        expected_label,
        "correctly" if expected_label == predicted_label else "wrongly",
        predicted_label))


The image had label 7 and was correctly classified as 7
The image had label 2 and was correctly classified as 2
The image had label 1 and was correctly classified as 1

This is great: you were able to classify all three images correctly! What's special about these predictions is that you didn't reveal any private information to get this service. The model host never saw your input data or your predictions, and you never downloaded the model. You were able to get private predictions on encrypted data with an encrypted model!
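
If you want more evidence than three samples, you could extend the same loop to estimate accuracy over a larger slice of the test set. This is a hypothetical extension, assuming query_model keeps accepting one (1, 28, 28, 1) image per call as above:

# Hypothetical extension: estimate accuracy over a larger sample,
# assuming query_model accepts one (1, 28, 28, 1) image per call
num_samples = 100
correct = 0
for image, expected_label in zip(x_test[:num_samples], y_test[:num_samples]):
    res = client.query_model(image.reshape(1, 28, 28, 1))
    correct += int(np.argmax(res) == expected_label)

print("Private prediction accuracy on {} samples: {:.1%}".format(
    num_samples, correct / num_samples))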

Before we rush off to apply this in our own apps, let's quickly go back to Part 13b to clean up our served model!